Results 1 - 20 of 39
1.
Sci Rep ; 14(1): 5573, 2024 03 06.
Article in English | MEDLINE | ID: mdl-38448446

ABSTRACT

To navigate through their immediate environment, humans process scene information rapidly. How does the cascade of neural processing elicited by scene viewing unfold over time to facilitate navigational planning? To investigate, we recorded human brain responses to visual scenes with electroencephalography and related those to computational models that operationalize three aspects of scene processing (2D, 3D, and semantic information), as well as to a behavioral model capturing navigational affordances. We found a temporal processing hierarchy: navigational affordance is processed later than the other scene features (2D, 3D, and semantic) investigated. This reveals the temporal order in which the human brain computes complex scene information and suggests that the brain leverages these pieces of information to plan navigation.
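
The model-to-brain comparison described above is typically carried out with time-resolved representational similarity analysis (RSA). The sketch below illustrates that logic on synthetic data; the array shapes, the random placeholder values, and the choice of Spearman correlation are assumptions for illustration, not the authors' actual pipeline.

```python
# Time-resolved RSA sketch: correlate a neural representational dissimilarity
# matrix (RDM), computed from EEG patterns at each time point, with model RDMs
# standing in for 2D, 3D, semantic, and navigational-affordance models.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_cond, n_chan, n_time = 50, 64, 200
eeg = np.random.randn(n_cond, n_chan, n_time)        # condition-averaged EEG (placeholder)
n_pairs = n_cond * (n_cond - 1) // 2
model_rdms = {name: np.random.rand(n_pairs)          # placeholder model RDMs
              for name in ["2D", "3D", "semantic", "affordance"]}

rsa_timecourse = {name: np.zeros(n_time) for name in model_rdms}
for t in range(n_time):
    eeg_rdm = pdist(eeg[:, :, t], metric="correlation")   # neural RDM at time t
    for name, mrdm in model_rdms.items():
        rho, _ = spearmanr(eeg_rdm, mrdm)
        rsa_timecourse[name][t] = rho

# Comparing peak or onset latencies of these time courses gives the temporal
# ordering of the corresponding scene properties.
```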


Subjects
Brain, Time Perception, Humans, Electroencephalography, Records, Semantics
2.
Cognition ; 245: 105723, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38262271

ABSTRACT

According to predictive processing theories, vision is facilitated by predictions derived from our internal models of what the world should look like. However, the contents of these models, and how they vary across people, remain unclear. Here, we use drawing as a behavioral readout of the contents of the internal models of individual participants. Participants were first asked to draw typical versions of scene categories, as descriptors of their internal models. These drawings were converted into standardized 3D renders, which we used as stimuli in subsequent scene categorization experiments. Across two experiments, participants' scene categorization was more accurate for renders tailored to their own drawings than for renders based on others' drawings or on copies of scene photographs, suggesting that scene perception is determined by a match with idiosyncratic internal models. Using a deep neural network to computationally evaluate similarities between scene renders, we further demonstrate that graded similarity to the render based on a participant's own typical drawings (and thus to their internal model) predicts categorization performance across a range of candidate scenes. Together, our results showcase the potential of a new method for understanding individual differences, starting from participants' personal expectations about the structure of real-world scenes.
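
The DNN-based similarity measure mentioned above can be approximated with any pretrained image model. The sketch below uses an off-the-shelf ResNet-18 from torchvision as a stand-in (the abstract does not specify the network, and the file names are hypothetical): features are extracted for two renders and compared with cosine similarity.

```python
# Extract penultimate-layer features for two scene renders with a pretrained
# ResNet-18 (a stand-in for whichever DNN was used) and compare them with
# cosine similarity. File names are hypothetical.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = torch.nn.Identity()          # drop the classifier; keep 512-d features
net.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def features(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return net(img).squeeze(0)

own_render = features("render_from_own_drawing.png")       # hypothetical files
candidate_render = features("render_candidate_scene.png")
similarity = torch.nn.functional.cosine_similarity(own_render, candidate_render, dim=0).item()
print(f"feature similarity: {similarity:.3f}")
```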


Subjects
Individuality, Visual Pattern Recognition, Humans, Neural Networks (Computer), Visual Perception, Photic Stimulation/methods
3.
Sci Adv ; 9(45): eadi2321, 2023 11 10.
Article in English | MEDLINE | ID: mdl-37948520

ABSTRACT

During naturalistic vision, the brain generates coherent percepts by integrating sensory inputs scattered across the visual field. Here, we asked whether this integration process is mediated by rhythmic cortical feedback. In electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) experiments, we experimentally manipulated integrative processing by changing the spatiotemporal coherence of naturalistic videos presented across visual hemifields. Our EEG data revealed that information about incoherent videos is coded in feedforward-related gamma activity while information about coherent videos is coded in feedback-related alpha activity, indicating that integration is indeed mediated by rhythmic activity. Our fMRI data identified scene-selective cortex and human middle temporal complex (hMT) as likely sources of this feedback. Analytically combining our EEG and fMRI data further revealed that feedback-related representations in the alpha band shape the earliest stages of visual processing in cortex. Together, our findings indicate that the construction of coherent visual experiences relies on cortical feedback rhythms that fully traverse the visual hierarchy.
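
Relating stimulus information to distinct frequency bands, as in the EEG analysis above, is often done by band-pass filtering the signal and decoding the stimulus from band-limited power. The sketch below illustrates that general approach on synthetic data; the filter settings, band definitions, and classifier are assumptions, not the study's pipeline.

```python
# Band-limited decoding sketch: filter single-trial EEG into alpha and gamma
# bands, summarize each trial by its mean envelope per channel, and decode the
# stimulus condition from each band separately.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 500
n_trials, n_chan, n_samp = 200, 64, 500
eeg = np.random.randn(n_trials, n_chan, n_samp)     # placeholder single-trial EEG
labels = np.random.randint(0, 2, n_trials)          # e.g., coherent vs. incoherent video

def band_envelope(data, low, high):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, data, axis=-1)
    return np.abs(hilbert(filtered, axis=-1)).mean(axis=-1)   # trials x channels

for band, (low, high) in {"alpha": (8, 12), "gamma": (60, 90)}.items():
    X = band_envelope(eeg, low, high)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
    print(f"{band}-band decoding accuracy: {acc:.2f}")
```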


Subjects
Visual Cortex, Humans, Feedback, Visual Cortex/diagnostic imaging, Photic Stimulation/methods, Ocular Vision, Visual Perception, Magnetic Resonance Imaging/methods, Brain Mapping/methods
4.
J Cogn Neurosci ; 35(11): 1879-1897, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37590093

ABSTRACT

Humans effortlessly make quick and accurate perceptual decisions about the nature of their immediate visual environment, such as the category of the scene they face. Previous research has revealed a rich set of cortical representations potentially underlying this feat. However, it remains unknown which of these representations are suitably formatted for decision-making. Here, we approached this question empirically and computationally, using neuroimaging and computational modeling. For the empirical part, we collected EEG data and RTs from human participants during a scene categorization task (natural vs. man-made). We then related the EEG data to behavior using a multivariate extension of signal detection theory. We observed a correlation between neural data and behavior specifically between ∼100 msec and ∼200 msec after stimulus onset, suggesting that the neural scene representations in this time period are suitably formatted for decision-making. For the computational part, we evaluated a recurrent convolutional neural network (RCNN) as a model of brain and behavior. Unifying our previous observations in an image-computable model, the RCNN predicted well the neural representations, the behavioral scene categorization data, and the relationship between them. Our results identify and computationally characterize the neural and behavioral correlates of scene categorization in humans.
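
One common way to relate multivariate EEG patterns to reaction times, in the spirit of the multivariate signal detection approach mentioned above, is a "distance-to-bound" analysis: a linear classifier is fit at each time point and each trial's distance from the decision boundary is correlated with its RT. The sketch below shows that logic on synthetic data (cross-validation is omitted for brevity); it is an illustration, not the study's exact method.

```python
# Distance-to-bound sketch: fit a linear discriminant on EEG patterns at each
# time point, take each trial's (unsigned) distance from the decision boundary,
# and correlate it with reaction times.
import numpy as np
from scipy.stats import spearmanr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

n_trials, n_chan, n_time = 300, 64, 120
eeg = np.random.randn(n_trials, n_chan, n_time)     # placeholder epoched EEG
labels = np.random.randint(0, 2, n_trials)          # natural vs. man-made scene
rts = np.random.uniform(0.3, 1.0, n_trials)         # reaction times in seconds

brain_behaviour_corr = np.zeros(n_time)
for t in range(n_time):
    lda = LinearDiscriminantAnalysis().fit(eeg[:, :, t], labels)
    dist = np.abs(lda.decision_function(eeg[:, :, t]))   # distance from the boundary
    # Trials represented far from the boundary are expected to be categorized
    # faster, i.e. to show a negative correlation with RT.
    rho, _ = spearmanr(dist, rts)
    brain_behaviour_corr[t] = rho
```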


Subjects
Brain, Visual Pattern Recognition, Humans, Photic Stimulation/methods, Brain/diagnostic imaging, Brain Mapping/methods
5.
Neuroimage ; 272: 120053, 2023 05 15.
Article in English | MEDLINE | ID: mdl-36966853

ABSTRACT

Spatial attention helps us to efficiently localize objects in cluttered environments. However, the processing stage at which spatial attention modulates object location representations remains unclear. Here we investigated this question by identifying processing stages in time and space, in an EEG and an fMRI experiment respectively. As both object location representations and attentional effects have been shown to depend on the background on which objects appear, we included object background as an experimental factor. During the experiments, human participants viewed images of objects appearing in different locations on blank or cluttered backgrounds while performing a task either at fixation or in the periphery, to direct their covert spatial attention away from or towards the objects. We used multivariate classification to assess object location information. Consistent across the EEG and fMRI experiments, we show that spatial attention modulated location representations during late processing stages (>150 ms, in middle and high ventral visual stream areas) independent of background condition. Our results clarify the processing stage at which attention modulates object location representations in the ventral visual stream and show that attentional modulation is a cognitive process separate from the recurrent processes related to processing objects on cluttered backgrounds.


Subjects
Visual Cortex, Humans, Attention, Magnetic Resonance Imaging, Visual Perception, Visual Pattern Recognition
6.
J Neurosci ; 43(10): 1731-1741, 2023 03 08.
Article in English | MEDLINE | ID: mdl-36759190

ABSTRACT

Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. We address this issue by asking which representational features are currently unaccounted for in neural time series data, estimated for multiple areas of the ventral stream via source-reconstructed magnetoencephalography data acquired in human participants (nine females, six males) during object viewing. We focus on the ability of visuo-semantic models, consisting of human-generated labels of object features and categories, to explain variance beyond the explanatory power of DNNs alone. We report a gradual reversal in the relative importance of DNN versus visuo-semantic features as ventral-stream object representations unfold over space and time. Although lower-level visual areas are better explained by DNN features starting early in time (at 66 ms after stimulus onset), higher-level cortical dynamics are best accounted for by visuo-semantic features starting later in time (at 146 ms after stimulus onset). Among the visuo-semantic features, object parts and basic categories drive the advantage over DNNs. These results show that a significant component of the variance unexplained by DNNs in higher-level cortical dynamics is structured and can be explained by readily nameable aspects of the objects. We conclude that current DNNs fail to fully capture dynamic representations in higher-level human visual cortex and suggest a path toward more accurate models of ventral-stream computations.

SIGNIFICANCE STATEMENT

When we view objects such as faces and cars in our visual environment, their neural representations dynamically unfold over time at a millisecond scale. These dynamics reflect the cortical computations that support fast and robust object recognition. DNNs have emerged as a promising framework for modeling these computations but cannot yet fully account for the neural dynamics. Using magnetoencephalography data acquired in human observers during object viewing, we show that readily nameable aspects of objects, such as 'eye', 'wheel', and 'face', can account for variance in the neural dynamics over and above DNNs. These findings suggest that DNNs and humans may in part rely on different object features for visual recognition and provide guidelines for model improvement.
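
The claim that visuo-semantic features explain variance "beyond" DNNs rests on a variance-partitioning (hierarchical regression) logic: compare the cross-validated fit of a DNN-only model with that of a model that also includes the visuo-semantic predictors. Below is a minimal sketch of that comparison on synthetic data; the predictor counts and the ridge penalty are arbitrary assumptions.

```python
# Hierarchical-regression sketch: cross-validated fit of a DNN-only model vs. a
# model that also includes visuo-semantic predictors; their difference estimates
# the variance uniquely contributed by the visuo-semantic features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

n_samples = 500                                  # e.g., condition pairs or time points
dnn_feats = np.random.randn(n_samples, 100)      # placeholder DNN predictors
sem_feats = np.random.randn(n_samples, 20)       # placeholder visuo-semantic predictors
neural = np.random.randn(n_samples)              # neural quantity to be explained

r2_dnn = cross_val_score(Ridge(alpha=1.0), dnn_feats, neural,
                         cv=5, scoring="r2").mean()
r2_full = cross_val_score(Ridge(alpha=1.0), np.hstack([dnn_feats, sem_feats]),
                          neural, cv=5, scoring="r2").mean()
print("variance beyond DNN features:", r2_full - r2_dnn)
```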


Assuntos
Reconhecimento Visual de Modelos , Semântica , Masculino , Feminino , Humanos , Redes Neurais de Computação , Percepção Visual , Encéfalo , Mapeamento Encefálico/métodos , Imageamento por Ressonância Magnética/métodos
7.
J Neurosci ; 43(3): 484-500, 2023 01 18.
Article in English | MEDLINE | ID: mdl-36535769

ABSTRACT

Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both. However, other work has proposed that representations of drawings and natural images become similar only after substantial processing has taken place, suggesting distinct mechanisms. To arbitrate between those alternatives, we measured brain responses resolved in space and time using fMRI and MEG, respectively, while human participants (female and male) viewed images of objects depicted as photographs, line drawings, or sketch-like drawings. Using multivariate decoding, we demonstrate that object category information emerged similarly fast and across overlapping regions in occipital, ventral-temporal, and posterior parietal cortex for all types of depiction, yet with smaller effects at higher levels of visual abstraction. In addition, cross-decoding between depiction types revealed strong generalization of object category information from early processing stages on. Finally, by combining fMRI and MEG data using representational similarity analysis, we found that visual information traversed similar processing stages for all types of depiction, yet with an overall stronger representation for photographs. Together, our results demonstrate broad commonalities in the neural dynamics of object recognition across types of depiction, thus providing clear evidence for shared neural mechanisms underlying recognition of natural object images and abstract drawings.

SIGNIFICANCE STATEMENT

When we see a line drawing, we effortlessly recognize it as an object in the world despite its simple and abstract style. Here we asked to what extent this correspondence in perception is reflected in the brain. To answer this question, we measured how neural processing of objects depicted as photographs and line drawings with varying levels of detail (from natural images to abstract line drawings) evolves over space and time. We find broad commonalities in the spatiotemporal dynamics and the neural representations underlying the perception of photographs and even abstract drawings. These results indicate a shared basic mechanism supporting recognition of drawings and natural images.


Subjects
Visual Pattern Recognition, Visual Perception, Humans, Male, Female, Visual Pattern Recognition/physiology, Photic Stimulation/methods, Visual Perception/physiology, Magnetic Resonance Imaging/methods, Parietal Lobe/physiology, Brain Mapping/methods
8.
Curr Biol ; 32(24): 5422-5432.e6, 2022 12 19.
Article in English | MEDLINE | ID: mdl-36455560

ABSTRACT

Visual categorization is a human core cognitive capacity [1,2] that depends on the development of visual category representations in the infant brain [3-7]. However, the exact nature of infant visual category representations and their relationship to the corresponding adult form remains unknown [8]. Our results clarify the nature of visual category representations from electroencephalography (EEG) data in 6- to 8-month-old infants and their developmental trajectory toward adult maturity in the key characteristics of temporal dynamics [2,9], representational format [10-12], and spectral properties [13,14]. Temporal dynamics change from slowly emerging, developing representations in infants to quickly emerging, complex representations in adults. Despite those differences, infants and adults already partly share visual category representations. The format of infants' representations is visual features of low to intermediate complexity, whereas adults' representations also encode high-complexity features. Theta band activity contributes to visual category representations in infants, and these representations are shifted to the alpha/beta band in adults. Together, we reveal the developmental neural basis of visual categorization in humans, show how information transmission channels change in development, and demonstrate the power of advanced multivariate analysis techniques in infant EEG research for theory building in developmental cognitive science.


Subjects
Brain, Electroencephalography, Adult, Humans, Infant, Multivariate Analysis, Visual Pattern Recognition
9.
Neuroimage ; 264: 119754, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36400378

ABSTRACT

The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at a millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to train properly, and to this day there is a lack of large brain datasets that extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high temporal resolution EEG responses to images of objects on a natural background. This dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions. Through computational modeling we established the quality of this dataset in five ways. First, we trained linearizing encoding models that successfully synthesized the EEG responses to arbitrary images. Second, we correctly identified the image conditions of the recorded EEG data in a zero-shot fashion, using EEG responses synthesized for hundreds of thousands of candidate image conditions. Third, we show that both the high number of conditions and the trial repetitions of the EEG dataset contribute to the trained models' prediction accuracy. Fourth, we built encoding models whose predictions generalize well to novel participants. Fifth, we demonstrate full end-to-end training of randomly initialized DNNs that output EEG responses for arbitrary input images. We release this dataset as a tool to foster research in visual neuroscience and computer vision.
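
A linearizing encoding model of the kind described here maps image features to EEG responses with a linear regression, and zero-shot identification then matches measured responses to the model's predictions. The sketch below illustrates both steps on synthetic placeholder data; the feature dimensionalities and the ridge penalty are assumptions.

```python
# Encoding-model sketch: ridge regression from image features (e.g., DNN
# activations) to flattened EEG responses, followed by zero-shot identification
# of held-out images via correlation between measured and predicted responses.
import numpy as np
from sklearn.linear_model import Ridge

n_train, n_test, n_feat, n_eeg = 1000, 200, 300, 64 * 100   # channels x time, flattened
Xtr = np.random.randn(n_train, n_feat)           # features of training images
Ytr = np.random.randn(n_train, n_eeg)            # measured EEG for training images
Xte = np.random.randn(n_test, n_feat)            # features of held-out images
Yte = np.random.randn(n_test, n_eeg)             # measured EEG for held-out images

enc = Ridge(alpha=10.0).fit(Xtr, Ytr)
Ypred = enc.predict(Xte)                         # synthesized EEG for held-out images

# Zero-shot identification: assign each measured response to the predicted
# response it correlates with most strongly.
corr = np.corrcoef(Yte, Ypred)[:n_test, n_test:]  # measured x predicted correlations
identification_acc = (corr.argmax(axis=1) == np.arange(n_test)).mean()
print(f"identification accuracy: {identification_acc:.2f}")
```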


Subjects
Brain Mapping, Visual Perception, Humans, Visual Perception/physiology, Machine Learning, Brain/physiology, Electroencephalography
10.
Commun Biol ; 5(1): 1247, 2022 11 14.
Article in English | MEDLINE | ID: mdl-36376446

ABSTRACT

Distinguishing animate from inanimate things is of great behavioural importance. Despite distinct brain and behavioural responses to animate and inanimate things, it remains unclear which object properties drive these responses. Here, we investigate the importance of five object dimensions related to animacy ("being alive", "looking like an animal", "having agency", "having mobility", and "being unpredictable") in brain (fMRI, EEG) and behaviour (property and similarity judgements) of 19 participants. We used a stimulus set of 128 images, optimized by a genetic algorithm to disentangle these five dimensions. The five dimensions explained much variance in the similarity judgments. Each dimension explained significant variance in the brain representations (except, surprisingly, "being alive"), however, to a lesser extent than in behaviour. Different brain regions sensitive to animacy may represent distinct dimensions, either as accessible perceptual stepping stones toward detecting whether something is alive or because they are of behavioural importance in their own right.
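
Asking how much each of the five dimensions explains in the similarity judgements amounts to regressing pairwise dissimilarities on per-pair differences along those dimensions. Below is a minimal sketch of that regression on synthetic ratings; the dimension names follow the abstract, everything else is a placeholder.

```python
# Predict pairwise dissimilarity judgements from absolute rating differences
# along five animacy-related dimensions, using ordinary linear regression.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.linear_model import LinearRegression

n_images = 128
dims = ["alive", "animal_like", "agency", "mobility", "unpredictable"]
ratings = np.random.rand(n_images, len(dims))                    # placeholder per-image ratings
behav_dissim = np.random.rand(n_images * (n_images - 1) // 2)    # placeholder judgements

# One predictor per dimension: absolute rating difference for each image pair.
predictors = np.stack([pdist(ratings[:, [d]], metric="cityblock")
                       for d in range(len(dims))], axis=1)

model = LinearRegression().fit(predictors, behav_dissim)
print(dict(zip(dims, model.coef_)), "R^2 =", model.score(predictors, behav_dissim))
```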


Subjects
Brain, Visual Pattern Recognition, Humans, Visual Pattern Recognition/physiology, Brain/diagnostic imaging, Brain/physiology, Brain Mapping, Magnetic Resonance Imaging/methods, Judgment/physiology
11.
J Neurophysiol ; 127(6): 1622-1628, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35583972

ABSTRACT

Humans can effortlessly categorize objects, both when they are conveyed through visual images and spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study, we used EEG (n = 48) and time-resolved multivariate pattern analysis to investigate 1) the time course with which object category information emerges in the auditory modality and 2) how the representational transition from individual object identification to category representation compares between the auditory modality and the visual modality. Our results show that 1) auditory object category representations can be reliably extracted from EEG signals and 2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, there was no convergence toward conceptual modality-independent representations, thus providing no evidence for a shared supramodal code.

NEW & NOTEWORTHY

Object categorization operates on inputs from different sensory modalities, such as vision and audition. This process was mainly studied in vision. Here, we explore auditory object categorization. We show that auditory object category representations can be reliably extracted from EEG signals and, similar to vision, auditory representations initially carry information about individual objects, which is followed by a subsequent representation of the objects' category membership.


Assuntos
Mapeamento Encefálico , Encéfalo , Percepção Auditiva , Cognição , Humanos , Reconhecimento Visual de Modelos , Estimulação Luminosa/métodos , Visão Ocular
12.
Dev Cogn Neurosci ; 54: 101094, 2022 04.
Article in English | MEDLINE | ID: mdl-35248819

ABSTRACT

Time-resolved multivariate pattern analysis (MVPA), a popular technique for analyzing magneto- and electro-encephalography (M/EEG) neuroimaging data, quantifies the extent and time course by which neural representations support the discrimination of relevant stimulus dimensions. As EEG is widely used for infant neuroimaging, time-resolved MVPA of infant EEG data is a particularly promising tool for infant cognitive neuroscience. MVPA has recently been applied to common infant imaging methods such as EEG and fNIRS. In this tutorial, we provide and describe code to implement time-resolved, within-subject MVPA with infant EEG data. An example implementation of time-resolved MVPA based on linear SVM classification is described, with accompanying code in Matlab and Python. Results from a test dataset indicated that, in both infants and adults, this method reliably produced above-chance accuracy for classifying stimulus images. Extensions of the classification analysis are presented, including both geometric- and accuracy-based representational similarity analysis, implemented in Python. Common choices of implementation are presented and discussed. As the amount of artifact-free EEG data contributed by each participant is lower in studies of infants than in studies of children and adults, we also explore and discuss the impact of varying participant-level inclusion thresholds on the resulting MVPA findings in these datasets.
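
For orientation, the following is a minimal Python sketch of the kind of time-resolved, within-subject MVPA the tutorial describes: cross-validated linear-SVM classification at every time point of an epoched EEG recording. Data shapes are illustrative; the tutorial's accompanying Matlab and Python code should be consulted for actual analyses.

```python
# Time-resolved MVPA sketch: at each time point, train and test a linear SVM on
# channel patterns across trials with 5-fold cross-validation, yielding a
# decoding-accuracy time course.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

n_trials, n_chan, n_time = 120, 32, 250
epochs = np.random.randn(n_trials, n_chan, n_time)   # placeholder infant EEG epochs
labels = np.random.randint(0, 2, n_trials)           # two stimulus conditions

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = np.array([cross_val_score(clf, epochs[:, :, t], labels, cv=5).mean()
                     for t in range(n_time)])
# "accuracy" is a decoding time course; sustained above-chance stretches indicate
# when the two conditions are discriminable from the neural signal.
```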


Subjects
Brain, Cognitive Neuroscience, Adult, Child, Electroencephalography/methods, Humans, Infant, Multivariate Analysis, Neuroimaging/methods
13.
PLoS Comput Biol ; 18(2): e1009837, 2022 02.
Article in English | MEDLINE | ID: mdl-35120139

ABSTRACT

Abstract conceptual representations are critical for human cognition. Despite their importance, key properties of these representations remain poorly understood. Here, we used computational models of distributional semantics to predict multivariate fMRI activity patterns during the activation and contextualization of abstract concepts. We devised a task in which participants had to embed abstract nouns into a story that they developed around a given background context. We found that representations in inferior parietal cortex were predicted by concept similarities emerging in models of distributional semantics. By constructing different model families, we reveal the models' learning trajectories and delineate how abstract and concrete training materials contribute to the formation of brain-like representations. These results inform theories about the format and emergence of abstract conceptual representations in the human brain.


Assuntos
Encéfalo/fisiologia , Formação de Conceito/fisiologia , Semântica , Humanos , Imageamento por Ressonância Magnética
14.
Nat Hum Behav ; 6(6): 796-811, 2022 06.
Article in English | MEDLINE | ID: mdl-35210593

ABSTRACT

To interact with objects in complex environments, we must know what they are and where they are in spite of challenging viewing conditions. Here, we investigated where, how and when representations of object location and category emerge in the human brain when objects appear on cluttered natural scene images using a combination of functional magnetic resonance imaging, electroencephalography and computational models. We found location representations to emerge along the ventral visual stream towards lateral occipital complex, mirrored by gradual emergence in deep neural networks. Time-resolved analysis suggested that computing object location representations involves recurrent processing in high-level visual cortex. Object category representations also emerged gradually along the ventral visual stream, with evidence for recurrent computations. These results resolve the spatiotemporal dynamics of the ventral visual stream that give rise to representations of where and what objects are present in a scene under challenging viewing conditions.


Subjects
Visual Pattern Recognition, Visual Cortex, Brain Mapping, Humans, Magnetic Resonance Imaging, Neural Networks (Computer), Visual Cortex/diagnostic imaging
15.
J Cogn Neurosci ; 34(1): 4-15, 2021 12 06.
Article in English | MEDLINE | ID: mdl-34705031

ABSTRACT

During natural vision, our brains are constantly exposed to complex, but regularly structured, environments. Real-world scenes are defined by typical part-whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part-whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: By dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their constituent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene's part-whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.


Subjects
Brain, Visual Pattern Recognition, Humans, Visual Perception
16.
Neuroimage ; 240: 118365, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34233220

ABSTRACT

Looking for objects within complex natural environments is a task everybody performs multiple times each day. In this study, we explore how the brain uses the typical composition of real-world environments to efficiently solve this task. We recorded fMRI activity while participants performed two different categorization tasks on natural scenes. In the object task, they indicated whether the scene contained a person or a car, while in the scene task, they indicated whether the scene depicted an urban or a rural environment. Critically, each scene was presented in an "intact" way, preserving its coherent structure, or in a "jumbled" way, with information swapped across quadrants. In both tasks, participants' categorization was more accurate and faster for intact scenes. These behavioral benefits were accompanied by stronger responses to intact than to jumbled scenes across high-level visual cortex. To track the amount of object information in visual cortex, we correlated multi-voxel response patterns during the two categorization tasks with response patterns evoked by people and cars in isolation. We found that object information in object- and body-selective cortex was enhanced when the object was embedded in an intact, rather than a jumbled scene. However, this enhancement was only found in the object task: When participants instead categorized the scenes, object information did not differ between intact and jumbled scenes. Together, these results indicate that coherent scene structure facilitates the extraction of object information in a task-dependent way, suggesting that interactions between the object and scene processing pathways adaptively support behavioral goals.
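
The object-information measure described above follows a pattern-correlation logic: the multi-voxel pattern evoked by a scene is correlated with "template" patterns evoked by the object categories shown in isolation. A toy sketch of that index on synthetic voxel patterns (all arrays and the index definition are placeholders for illustration):

```python
# Pattern-correlation sketch: quantify object information in a region as the
# correlation of a scene-evoked voxel pattern with the matching isolated-object
# template, minus the correlation with the non-matching template.
import numpy as np

n_voxels = 500
person_template = np.random.randn(n_voxels)      # pattern for isolated people (placeholder)
car_template = np.random.randn(n_voxels)         # pattern for isolated cars (placeholder)
scene_pattern = np.random.randn(n_voxels)        # pattern for a scene containing a person

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

object_information = corr(scene_pattern, person_template) - corr(scene_pattern, car_template)
# In the study, such an index would be compared between intact and jumbled
# scenes, and between the object and scene tasks.
```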


Assuntos
Imageamento por Ressonância Magnética/métodos , Reconhecimento Visual de Modelos/fisiologia , Estimulação Luminosa/métodos , Desempenho Psicomotor/fisiologia , Córtex Visual/diagnóstico por imagem , Córtex Visual/fisiologia , Adulto , Feminino , Humanos , Masculino , Análise Multivariada , Percepção Visual/fisiologia , Adulto Jovem
17.
Cereb Cortex ; 31(12): 5664-5675, 2021 10 22.
Article in English | MEDLINE | ID: mdl-34291294

ABSTRACT

Brain decoding can predict visual perception from non-invasive electrophysiological data by combining information across multiple channels. However, decoding methods typically conflate the composite and distributed neural processes underlying perception that are together present in the signal, making it unclear what specific aspects of the neural computations involved in perception are reflected in this type of macroscale data. Using MEG data recorded while participants viewed a large number of naturalistic images, we analytically decomposed the brain signal into its oscillatory and non-oscillatory components, and used this decomposition to show that there are at least three dissociable stimulus-specific aspects to the brain data: a slow, non-oscillatory component, reflecting the temporally stable aspect of the stimulus representation; a global phase shift of the oscillation, reflecting the overall speed of processing of specific stimuli; and differential patterns of phase across channels, likely reflecting stimulus-specific computations. Further, we show that common cognitive interpretations of decoding analysis, in particular about how representations generalize across time, can benefit from acknowledging the multicomponent nature of the signal in the study of perception.
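
The study's decomposition of the MEG signal into non-oscillatory and oscillatory (phase-carrying) components is analytical; as a loose illustration of the underlying idea only, the sketch below separates a slow drift from alpha-band activity by filtering and extracts instantaneous phase with the Hilbert transform. The filter choices and the synthetic signal are assumptions.

```python
# Separate a slow, non-oscillatory component from band-limited oscillatory
# activity and compute its instantaneous phase on a synthetic one-channel signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000
t = np.arange(0, 1, 1 / fs)
sig = 0.5 * t + np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(t.size)  # drift + 10 Hz + noise

def bandpass(data, low, high):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data)

b, a = butter(4, 2 / (fs / 2), btype="low")
slow_component = filtfilt(b, a, sig)          # non-oscillatory, temporally stable part
oscillation = bandpass(sig, 8, 12)            # alpha-band oscillatory part
phase = np.angle(hilbert(oscillation))        # instantaneous phase per sample
```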


Assuntos
Encéfalo , Percepção Visual , Encéfalo/fisiologia , Cabeça , Humanos , Estimulação Luminosa/métodos , Percepção Visual/fisiologia
18.
Neuroimage ; 239: 118314, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34175428

ABSTRACT

Contextual information triggers predictions about the content ("what") of environmental stimuli to update an internal generative model of the surrounding world. However, visual information changes dynamically across time, and temporal predictability ("when") may influence the impact of internal predictions on visual processing. In this magnetoencephalography (MEG) study, we investigated how the processing of feature-specific information ("what") is affected by temporal predictability ("when"). Participants (N = 16) were presented with four consecutive Gabor patches (entrainers) with constant spatial frequency but variable orientation and temporal onset. A fifth target Gabor was presented after a longer delay, with a higher or lower spatial frequency that participants had to judge. We compared the neural responses to entrainers whose Gabor orientation could, or could not, be temporally predicted along the entrainer sequence, and with inter-entrainer timing that was constant (predictable) or variable (unpredictable). We observed suppression of evoked neural responses in the visual cortex for predictable stimuli. Interestingly, we found that temporal uncertainty increased expectation suppression. This suggests that in temporally uncertain scenarios the neurocognitive system invests fewer resources in integrating bottom-up information. Multivariate pattern analysis showed that predictable visual features could be decoded from neural responses. Temporal uncertainty did not affect decoding accuracy for early visual responses, with the feature specificity of early visual neural activity preserved across conditions. However, decoding accuracy was less sustained over time for temporally jittered than for isochronous predictable visual stimuli. These findings converge to suggest that the cognitive system processes the visual features of temporally predictable stimuli in greater detail, while the processing of temporally uncertain stimuli may rely more heavily on abstract internal expectations.


Subjects
Psychological Anticipation/physiology, Magnetoencephalography, Photic Stimulation, Time, Uncertainty, Visual Cortex/physiology, Visual Perception/physiology, Adult, Evoked Potentials/physiology, Female, Humans, Male, Multivariate Analysis, Reaction Time, Young Adult
19.
Dev Cogn Neurosci ; 45: 100860, 2020 10.
Article in English | MEDLINE | ID: mdl-32932205

ABSTRACT

Tools from computational neuroscience have facilitated the investigation of the neural correlates of mental representations. However, access to the representational content of neural activations early in life has remained limited. We asked whether patterns of neural activity elicited by complex visual stimuli (animals, human body) could be decoded from EEG data gathered from 12-15-month-old infants and adult controls. We assessed pairwise classification accuracy at each time-point after stimulus onset, for individual infants and adults. Classification accuracies rose above chance in both groups, within 500 ms. In contrast to adults, neural representations in infants were not linearly separable across visual domains. Representations were similar within, but not across, age groups. These findings suggest a developmental reorganization of visual representations between the second year of life and adulthood and provide a promising proof-of-concept for the feasibility of decoding EEG data within-subject to assess how the infant brain dynamically represents visual objects.


Subjects
Brain/physiology, Visual Pattern Recognition/physiology, Adolescent, Adult, Female, Humans, Infant, Male, Young Adult